Automated Assessment of Programming Assignments: Visual Feedback, Assignment Mobility, and Assessment of Students' Testing Skills
Authors
Petri Ihantola

Name of the doctoral dissertation: Automated Assessment of Programming Assignments: Visual Feedback, Assignment Mobility, and Assessment of Students' Testing Skills
Publisher: Aalto University School of Science (P.O. Box 11000, FI-00076 Aalto, www.aalto.fi)
Unit: Department of Computer Science and Engineering
Series: Aalto University publication series DOCTORAL DISSERTATIONS 131/2011
Field of research: Software Systems
Manuscript submitted: 12 July 2011
Manuscript revised: 24 October 2011
Date of the defence: 9 December 2011
Language: English
Type: Article dissertation (summary + original articles)

Abstract
The main objective of this thesis is to improve the automated assessment of programming assignments from the perspective of assessment tool developers.

We have developed visual feedback on the functionality of students' programs and explored methods to control the level of detail in visual feedback. We found that visual feedback does not require major changes to existing assessment platforms: most modern platforms are web based, which creates an opportunity to describe visualizations in JavaScript and HTML embedded into the textual feedback. Our preliminary results on the effectiveness of automatic visual feedback indicate that students perform equally well with visual and textual feedback. However, visual feedback based on automatically extracted object graphs can take less time to prepare than textual feedback of good quality.
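To make the embedding idea concrete, here is a minimal JavaScript sketch (ours, not code from the thesis; renderObjectGraph and the SVG layout are hypothetical) of a grader that inlines an object-graph drawing into otherwise ordinary HTML feedback:

    // Hypothetical helper: draw an object graph as inline SVG so it can be
    // embedded in the textual (HTML) feedback a platform already displays.
    function renderObjectGraph(nodes, edges) {
      const circles = nodes.map((label, i) =>
        `<circle cx="${60 + i * 100}" cy="40" r="18" fill="#eef" stroke="#336"/>` +
        `<text x="${60 + i * 100}" y="45" text-anchor="middle">${label}</text>`);
      const arrows = edges.map(([from, to]) =>
        `<line x1="${78 + from * 100}" y1="40" x2="${42 + to * 100}" y2="40"` +
        ` stroke="#336" marker-end="url(#arrow)"/>`);
      return `<svg width="${nodes.length * 100 + 20}" height="80">`
           + `<defs><marker id="arrow" markerWidth="8" markerHeight="8"`
           + ` refX="8" refY="4" orient="auto">`
           + `<path d="M0,0 L8,4 L0,8" fill="none" stroke="#336"/></marker></defs>`
           + circles.join('') + arrows.join('') + `</svg>`;
    }

    // The feedback stays plain HTML text, so the platform needs no changes:
    const feedback =
      '<p>After insert(2), your linked list looks like this:</p>'
      + renderObjectGraph(['head', '5', '2'], [[0, 1], [1, 2]]);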
We have also developed programming assignments that are easier to port from one server environment to another because the assessment is performed on the client side. This not only makes it easier to use the same assignments in different server environments but also removes the need to sandbox the execution of students' programs. The approach is likely to become more important as interactive study materials grow in popularity. Client-side assessment is better suited to self-study material than to grading, because assessment results sent by a client are often too easy to falsify (a minimal sketch of such a setup appears below).

Testing is an important part of programming, and automated assessment should also cover students' self-written tests. We have analyzed how students behave when they are rewarded for structural test coverage (e.g., line coverage) and found that this can lead them to write tests with good coverage but with poor ability to detect faulty programs. Mutation analysis, in which a large number of (faulty) programs are automatically derived from the program under test, turns out to be an effective way to detect tests that would otherwise fool our assessment systems. Applying mutation analysis directly to grading is problematic, because some of the derived programs are equivalent to the original, and some assignments or solution strategies generate more equivalent mutants than others.
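The client-side setup referenced above might look like the following sketch (hypothetical, assuming a trivial "sum" exercise; gradeOnClient is our name, not the thesis's implementation):

    // The exercise page ships the checks with the material; the student's
    // function runs in the browser, so the server needs no sandbox.
    function gradeOnClient(studentFn) {
      const cases = [
        { input: [1, 2], expected: 3 },
        { input: [-1, 1], expected: 0 },
      ];
      const failures = cases.filter(c => studentFn(...c.input) !== c.expected);
      return failures.length === 0
        ? 'All tests passed.'
        : `${failures.length} of ${cases.length} tests failed.`;
    }

    // Suitable for self-study: any "all passed" result the client reports
    // back to a server could just as easily be forged.
    console.log(gradeOnClient((a, b) => a + b));  // "All tests passed."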
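A second sketch (again ours, not from the thesis) illustrates how line coverage can be fooled and how a mutant exposes the weak test:

    // Program under test.
    function max(a, b) {
      if (a > b) { return a; }
      return b;
    }

    // A mutant, as a mutation tool might derive it: the comparison is flipped.
    function maxMutant(a, b) {
      if (a < b) { return a; }
      return b;
    }

    function weakTest(f) {
      f(3, 1);      // executes the "a > b" branch
      f(1, 3);      // executes the other branch: full line coverage
      return true;  // ...yet nothing is asserted
    }

    function strongTest(f) {
      return f(3, 1) === 3 && f(1, 3) === 3;
    }

    console.log(weakTest(max), weakTest(maxMutant));     // true true  -> mutant survives
    console.log(strongTest(max), strongTest(maxMutant)); // true false -> mutant killed

    // The grading caveat: mutating "a > b" to "a >= b" instead would yield a
    // mutant equivalent to the original (same output for every input), which
    // no test can kill -- one reason grading directly by mutation score is hard.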
Similar resources
Paperless Subjective Programming Assignment Assessment: A First Step
A significant component of student evaluation includes the objective and subjective assessment of programming assignments. We describe bits and pieces of a paperless electronic workflow we’ve recently used to provide objective and subjective feedback to students, emphasizing the tools and process used to provide subjective programming assignment assessment. We identify opportunities for future ...
Integrating Portfolio-Assessment into the Writing Process: Does it Affect a Significant Change in Iranian EFL Undergraduates' Writing Achievement? A Mixed-Methods Study
The paradigm shift from testing the outcome to assessing the learning process shines a light on alternative assessment approaches, among which portfolio-assessment has sparked researchers' interest in writing instruction. This study aimed at investigating the effect of portfolio-assessment on Iranian EFL students' writing achievement through the process-centered approach to writing ...
Marking complex assignments using peer assessment with an electronic voting system and an automated feedback tool
The work described in this paper relates to the development and use of a range of initiatives for marking complex master's-level assignments on the development of computer web applications. In the past, such assignments have proven difficult to mark, since they assess a range of skills including programming, human-computer interaction, and design. Based on the experience of several years ...
A comparison of electronic and paper-based assignment submission and feedback
This paper presents the results of a study evaluating student perceptions of online assignment submission. Forty-seven students submitted assignments and received feedback via features within the Virtual Learning Environment Blackboard™. The students then completed questionnaires comparing their experience of online submission and feedback with traditional methods. Results indicated that 88% of students ...
Does formal education improve students' reading skills? Ilam University of Medical Sciences experience
Introduction and purpose: Evidence suggests that improving study skills can improve students' academic performance and learning. Since most medical science curricula lack a coherent component for teaching study skills, this study aimed to formulate, implement, and evaluate study-skills lessons and examine their effects on the academic skill level of Ilam University of Medical Sciences students ...